Search Results: "cyb"

5 December 2020

Thorsten Alteholz: My Debian Activities in November 2020

FTP master Unfortunately a day only has 24h. As the freeze is approaching, I had to concentrate a bit more on keeping my packages in shape. So this month I only accepted nine packages. The good news: I rejected no package. The overall number of packages that got accepted was 328. Debian LTS This was my seventy-seventh month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 22.75h. During that time I did LTS uploads of: I also started to work on x11vnc and slirp. Last but not least I did some days of frontdesk duties. Debian ELTS This month was the twenty-ninth ELTS month. During my allocated time I uploaded: Unfortunately I also had to give back some hours. Last but not least I did some days of frontdesk duties. Other stuff This month I uploaded new upstream versions of: I fixed one or two bugs in: I improved packaging of: and there have been even some new packages: As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like in past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug is closed. Don't hesitate, start to squash :-). The announcement on the mailing list can be found here.

21 October 2020

Reproducible Builds: Supporter spotlight: Civil Infrastructure Platform

The Reproducible Builds project depends on our many projects, supporters and sponsors. We rely on their financial support, but they are also valued ambassadors who spread the word about the Reproducible Builds project and the work that we do. This is the first installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org. However, we are kicking off this series by featuring Urs Gleim and Yoshi Kobayashi of the Civil Infrastructure Platform (CIP) project.
Chris Lamb: Hi Urs and Yoshi, great to meet you. How might you relate the importance of the Civil Infrastructure Platform to a user who is non-technical? A: The Civil Infrastructure Platform (CIP) project is focused on establishing an open source base layer of industrial-grade software that acts as building blocks in civil infrastructure projects. End-users of this critical code include systems for electric power generation and energy distribution, oil and gas, water and wastewater, healthcare, communications, transportation, and community management. These systems deliver essential services, provide shelter, and support social interactions and economic development. They are society's lifelines, and CIP aims to contribute to and support these important pillars of modern society.
Chris: We have entered an age where our civilisations have become reliant on technology to keep us alive. Does the CIP believe that the software that underlies our own safety (and the safety of our loved ones) receives enough scrutiny today? A: For companies developing systems running our infrastructure and keeping our factories working, it is part of their business to ensure the availability, uptime, and security of these very systems. However, software complexity continues to increase, and the effort spent on those systems is now exploding. What is missing is a common way of achieving this through refining the same tools, and cooperating on the hardening and maintenance of standard components such as the Linux operating system.
Chris: How does the Reproducible Builds effort help the Civil Infrastructure Platform achieve its goals? A: Reproducibility helps a great deal in software maintenance. We have a number of use-cases that should have long-term support of more than 10 years. During this period, we encounter issues that need to be fixed in the original source code. But before we make changes to the source code, we need to check whether it is actually the original source code or not. If we can reproduce exactly the same binary from the source code even after 10 years, we can start to invest time and energy into making these fixes.
Chris: Can you give us a brief history of the Civil Infrastructure Platform? Are there any specific success stories that the CIP is particularly proud of? A: The CIP Project formed in 2016 as a project hosted by the Linux Foundation. It was launched out of the necessity to establish an open source framework and software foundation that delivers services for civil infrastructure and economic development on a global scale. Some key milestones we have achieved as a project include our collaboration with Debian, where we are helping with the Debian Long Term Support (LTS) initiative, which aims to extend the lifetime of all Debian stable releases to at least 5 years. This is critical because most control systems for transportation, power plants, healthcare and telecommunications run on Debian-based embedded systems. In addition, CIP is focused on IEC 62443, a standards-based approach to counter security vulnerabilities in industrial automation and control systems. Our belief is that this work will help mitigate the risk of cyber attacks, but in order to deal with evolving attacks of this kind, all of the layers that make up these complex systems (such as system services and component functions, in addition to the countless operational layers) must be kept secure. For this reason, the IEC 62443 series is attracting attention as the de facto cyber-security standard.
Chris: The Civil Infrastructure Platform project comprises a number of project members from different industries, with stakeholders across multiple countries and continents. How does working together with a broad group of interests help in your effectiveness and efficiency? A: Although the members have different products, they share the requirements and issues when developing sustainable products. In the end, we are driven by common goals. For the project members, working internationally is simply daily business. We see this as an advantage over regional efforts or efforts that focus on narrower domains or markets.
Chris: The Civil Infrastructure Platform supports a number of other existing projects and initiatives in the open source world too. How much do you feel being a part of the broader free software community helps you achieve your aims? A: Collaboration with other projects is an essential part of how CIP operates: we want to enable commonly-used software components. It would not make sense to re-invent solutions that are already established and widely used in product development. To this end, we have an "upstream first" policy which means that, if existing projects need to be modified to meet our needs or are already working on issues that we also need, we work directly with them.
Chris: Open source software in desktop or user-facing contexts receives a significant amount of publicity in the media. However, how do you see the future of free software from an industrial-oriented context? A: Open source software has already become an essential part of the industry and civil infrastructure, and the importance of open source software there is still increasing. Without open source software, we cannot achieve, run and maintain future complex systems, such as smart cities and other key pieces of civil infrastructure.
Chris: If someone wanted to know more about the Civil Infrastructure Platform (or even to get involved) where should they go to look? A: We have many avenues to participate and learn more! We have a website, a wiki and you can even follow us on Twitter.

For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

12 October 2020

Markus Koschany: My Free Software Activities in September 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in October) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
warzone2100
Debian Java
pdfsam
Misc Debian LTS This was my 55th month as a paid contributor and I have been paid to work 31.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 28th month and I have been paid to work 15 hours on ELTS. Thanks for reading and see you next time.

12 September 2020

Markus Koschany: My Free Software Activities in August 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in September) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
teeworlds
Debian Java Misc Debian LTS This was my 54th month as a paid contributor and I have been paid to work 20 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 27th month and I have been paid to work 14.25 hours on ELTS. Thanks for reading and see you next time.

31 August 2020

Jacob Adams: Command Line 101

How to Work in a Text-Only Environment.

What is this thing? When you first open a command-line (note that I use the terms command-line and shell interchangeably here; they're basically the same, but command-line is the more general term, and shell is the name for the program that executes commands for you) you'll see something like this:
jaadams@bg7:/tmp/thisfolder$
This line is called a command prompt and it tells you four pieces of information:
  1. jaadams: The username of the user that is currently running this shell.
  2. bg7: The name of the computer that this shell is running on, important for when you start accessing shells on remote machines.
  3. /tmp/thisfolder: The folder or directory that your shell is currently running in. Like a file explorer (like Windows Explorer or Mac's Finder) a shell always has a working directory, from which all relative paths (see sidenote below) are resolved.
  4. $: The prompt character, which marks the end of the prompt and indicates a normal user shell (a root shell conventionally ends with # instead).
When you first opened a shell, however, you might notice that it looks more like this:
jaadams@bg7:~$
This is a shorthand notation that the shell uses to make this output shorter when possible. ~ stands for your home directory, usually /home/<username>. Like C:\Users\<username>\ on Windows or /Users/<username> on Mac, this directory is where all your files should go by default. Thus a command prompt like this:
jaadams@bg7:~/Downloads$
actually tells you that you are currently in the /home/jaadams/Downloads directory.
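To make the ~ shorthand concrete, here is a small illustrative session (assuming, as in the prompts above, a user jaadams whose home directory is /home/jaadams):
jaadams@bg7:~$ echo ~
/home/jaadams
jaadams@bg7:~$ cd /tmp
jaadams@bg7:/tmp$ cd ~
jaadams@bg7:~$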

Sidenote: The Unix Filesystem and Relative Paths Folders on Linux and other Unix-derived systems like MacOS are usually called directories. These directories are represented by paths, strings that indicate where the directory is on the filesystem. The one unusual part is the so-called "root directory". All files are stored in this folder or directories under it. Its path is just / and there are no directories above it. For example, the directory called home typically contains all user directories. This is stored in the root directory, and each user's specific data is stored in a directory named after that user under home. Thus, the home directory of the user jacob is typically /home/jacob, that is, the directory jacob under the home directory stored in the root directory /. If you're interested in more details about what goes in what directory, man hier has the basics and the Filesystem Hierarchy Standard governs the layout of the filesystem on most Linux distributions. You don't always have to use the full path, however. If the path does not begin with a /, it is assumed that the path actually begins with the path of the current directory. So if you use a path like my/folders/here, and you're in the /home/jacob directory, the path will be treated like /home/jacob/my/folders/here. Each folder also contains the symbolic links .. and .; symbolic links are a very powerful kind of file that is actually a reference to another file. .. always represents the parent directory of the current directory, so /home/jacob/.. links to /home/. . always links to the current directory, so /home/jacob/. links to /home/jacob.
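Here is a short illustrative session showing how relative paths and the . and .. links resolve (the prompts assume the /home/jacob directory described in the sidenote exists):
jacob@lovelace/home/jacob$ cd ..
jacob@lovelace/home$ cd ./jacob
jacob@lovelace/home/jacob$ cd ../../tmp
jacob@lovelace/tmp$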

Running commands To run a command from the command prompt, you type its name and then usually some arguments to tell it what to do. For example, the echo command displays the text passed as arguments.
jacob@lovelace/home/jacob$ echo hello world
hello world
Arguments to commands are space-separated, so in the previous example hello is the first argument and world is the second. If you need an argument to contain spaces, you'll want to put quotes around it, echo "like so". Certain arguments are called "flags" or "options" (options if they take another argument, flags otherwise) and are usually prefixed with a hyphen; they change the way a program operates. For example, the ls command outputs the contents of a directory passed as an argument, but if you add -l before the directory, it will give you more details on the files in that directory.
jacob@lovelace/tmp/test$ ls /tmp/test
1  2  3  4  5  6
jacob@lovelace/tmp/test$ ls -l /tmp/test
total 0
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 1
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 2
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 3
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 4
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 5
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 6
jacob@lovelace/tmp/test$
Most commands take different flags to change their behavior in various ways.

File Management
  • cd <path>: Change the current directory of the running shell to <path>.
  • ls <path>: Output the contents of <path>. If no path is passed, it prints the contents of the current directory.
  • touch <filename>: create a new empty file called <filename>. Used on an existing file, it updates the file's last accessed and modified times. Most text editors can also create a new file for you, which is probably more useful.
  • mkdir <directory>: Create a new folder/directory at path <directory>.
  • mv <src> <dest>: Move a file or directory at path <src> to <dest>.
  • cp <src> <dest>: Copy a file or directory at path <src> to <dest>.
  • rm <file>: Remove a file at path <file>.
  • zip -r <zipfile> <contents...>: Create a zip file <zipfile> with contents <contents>. <contents> can be multiple arguments, and you'll usually want to use the -r argument when including directories in your zipfile, as otherwise only the directory will be included and not the files and directories within it.
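Here is a small worked session tying these commands together (the directory and file names are made up purely for illustration):
jacob@lovelace/tmp$ mkdir demo
jacob@lovelace/tmp$ cd demo
jacob@lovelace/tmp/demo$ touch notes.txt
jacob@lovelace/tmp/demo$ cp notes.txt backup.txt
jacob@lovelace/tmp/demo$ ls
backup.txt  notes.txt
jacob@lovelace/tmp/demo$ mv backup.txt old-notes.txt
jacob@lovelace/tmp/demo$ rm old-notes.txt
jacob@lovelace/tmp/demo$ ls
notes.txt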

Searching
  • grep <thing> <file>: Look for the string <thing> in <file>. If no <file> is passed it searches standard input (see the short example after this list).
  • find <path> -name <name>: Find a file or directory called <name> somewhere under <path>. This command is actually very powerful, but also very complex. For example you can delete all files in a directory older than 30 days with:
    find -mtime +30 -exec rm {} \;
    
  • locate <name>: A much easier to use command to find a file with a given name, but it is not usually installed by default.
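As a quick example of grep (the file greetings.txt and its contents are invented for illustration):
jacob@lovelace/tmp/demo$ cat greetings.txt
hello world
goodbye world
jacob@lovelace/tmp/demo$ grep hello greetings.txt
hello world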

Outputting Files
  • cat <files...>: Output (concatenate) all the files passed as arguments.
  • head <file>: Output the beginning of <file>.
  • tail <file>: Output the end of <file> (see the example after this list).
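By default head and tail print the first and last 10 lines respectively; the -n flag changes that count. Reusing the invented greetings.txt file from the searching example above:
jacob@lovelace/tmp/demo$ head -n 1 greetings.txt
hello world
jacob@lovelace/tmp/demo$ tail -n 1 greetings.txt
goodbye world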

How to Find the Right Command All commands (at least on sane Linux distributions like Debian or Ubuntu) are documented with a manual page, in man section 1 (for more information on manual sections, run man intro). This can be accessed using man <command>. You can search for the right command using the -k flag, as in man -k <search> (see the short example after the list of sources below). You can also view manual pages in your browser, on sites like https://manpages.debian.org or https://linux.die.net/man. This is not always helpful, however, because some commands' descriptions are not particularly useful, and also there are a lot of manual pages, which can make searching for a specific one difficult. For example, finding the right command to search inside text files is quite difficult via man (it's grep). When you can't find what you need with man I recommend falling back to searching the Internet. There are lots of bad Linux tutorials out there, but here are some reputable sources I recommend:
  • https://www.cyberciti.biz: nixCraft has excellent tutorials on all things Linux
  • Hosting providers like Digital Ocean or Linode: Good intro documentation, but can sometimes be outdated
  • https://tldp.org: The Linux Documentation project is great, but it can also be a little outdated sometimes.
  • https://stackoverflow.com: Oftentimes has great answers, but quality varies wildly since anyone can answer.
These are certainly not the only options but they're the sources I would recommend when available.
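A few concrete man invocations (the search term is arbitrary, and man -k output varies between systems, so none is shown here):
man ls        # read the manual page for the ls command
man -k zip    # search manual page names and descriptions for "zip"
man intro     # the introduction to user commands mentioned above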

How to Read a Manual Page Manual pages consist of a series of sections, each with a specific purpose. Instead of attempting to write my own description here, I'm going to borrow the excellent one from The Linux Documentation Project:
The NAME section is the only required section. Man pages without a name section are as useful as refrigerators at the north pole. This section also has a standardized format consisting of a comma-separated list of program or function names, followed by a dash, followed by a short (usually one line) description of the functionality the program (or function, or file) is supposed to provide. By means of makewhatis(8), the name sections make it into the whatis database files. Makewhatis is the reason the name section must exist, and why it must adhere to the format I described. (Formatting explanation cut for brevity) The SYNOPSIS section is intended to give a short overview on available program options. For functions this section lists corresponding include files and the prototype so the programmer knows the type and number of arguments as well as the return type. The DESCRIPTION section eloquently explains why your sequence of 0s and 1s is worth anything at all. Here's where you write down all your knowledge. This is the Hall Of Fame. Win other programmers' and users' admiration by making this section the source of reliable and detailed information. Explain what the arguments are for, the file format, what algorithms do the dirty jobs. The OPTIONS section gives a description of how each option affects program behaviour. You knew that, didn't you? The FILES section lists files the program or function uses. For example, it lists configuration files, startup files, and files the program directly operates on. (Cut details about installing files) The ENVIRONMENT section lists all environment variables that affect your program or function and tells how, of course. Most commonly the variables will hold pathnames, filenames or default options. The DIAGNOSTICS section should give an overview of the most common error messages from your program and how to cope with them. There's no need to explain system error messages (from perror(3)) or fatal signals (from psignal(3)) as they can appear during execution of any program. The BUGS section should ideally be non-existent. If you're brave, you can describe here the limitations, known inconveniences and features that others may regard as misfeatures. If you're not so brave, rename it the TO DO section ;-) The AUTHOR section is nice to have in case there are gross errors in the documentation or program behaviour (Bzzt!) and you want to mail a bug report. The SEE ALSO section is a list of related man pages in alphabetical order. Conventionally, it is the last section.

Remote Access One of the more powerful uses of the shell is through ssh, the secure shell. This allows you to remotely connect to another computer and run a shell on that machine:
user@host:~$ ssh other@example.com
other@example:~$
The prompt changes to reflect the change in user and host, as you can see in the example above. This allows you to work in a shell on that machine as if it was right in front of you.
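ssh can also run a single command on the remote machine and print its output locally, which is handy for quick checks (example.com and the account are placeholders, as in the example above):
user@host:~$ ssh other@example.com uptime
Here uptime runs on example.com, its output appears in your local terminal, and the connection closes as soon as the command finishes.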

Moving Files Between Machines There are several ways you can move files between machines over ssh. The first and easiest is scp, which works much like the cp command except that paths can also take a user@host: prefix to move files across computers. For example, if you wanted to copy a file test.txt to your home directory on another machine, the command would look like:
scp test.txt other@example.com:
(The home directory is the default path.) You can also copy files in the other direction by reversing the order of the arguments, and you can put a path after the colon to refer to another directory on the remote host. For example, if you wanted to fetch the file /etc/issue.net from example.com:
scp other@example.com:/etc/issue.net .
Another option is the sftp command, which gives you a very simple shell-like interface in which you can cd and ls, before either putting files onto the remote machine or getting files off of it. The final and most powerful option is rsync, which syncs the contents of one directory to another and doesn't copy files that haven't changed. It's powerful and complex, however, so I recommend reading the USAGE section of its man page.
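As a rough sketch of rsync usage (the directory name and host are placeholders):
rsync -av --dry-run ./myproject/ other@example.com:myproject/
The -a flag recurses into directories and preserves permissions and timestamps, -v prints each file as it is considered, and --dry-run only reports what would be transferred; drop it to actually copy the files. The trailing slash on ./myproject/ means the contents of myproject rather than the directory itself.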

Long-Running Commands The one problem with ssh is that it will stop any command running in your shell when you disconnect. If you want to leave something on and come back later then this can be a problem. This is where terminal multiplexers come in. tmux and screen both allow you to run a shell in a safe environment where it will continue even if you disconnect from it. You do this by running the command without any arguments, i.e. just tmux or just screen. In tmux you can disconnect from the current session by pressing Ctrl+b then d, and reattach with the tmux attach command. screen works similarly, but with Ctrl+a instead of b and screen -r to reattach.
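A typical tmux workflow on a remote machine might look like this sketch (the script name is made up for illustration):
user@host:~$ tmux                     # start a new session
user@host:~$ ./long-running-job.sh    # start something that takes hours
                                      # press Ctrl+b then d to detach; the job keeps running
user@host:~$ tmux attach              # later, perhaps after reconnecting over ssh, reattach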

Command Inputs and Outputs Arguments are not the only way to pass input to a command. They can also take input from what's called "standard input", which the shell usually connects to your keyboard. Output can go to two places, standard output and standard error, both of which are directed to the screen by default.

Redirecting I/O Notice how I said above that standard input/output/error are only usually connected to the keyboard and the terminal? This is because you can redirect them to other places with the shell operators <, > and the very powerful |.

File redirects The operators < and > redirect the input and output of a command to a file. For example, if you wanted a file called list.txt that contained a list of all the files in a directory /this/one/here you could use:
ls /this/one/here > list.txt
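The < operator works the other way around, feeding a file to a command's standard input. For example, to count how many lines (and therefore files) ended up in the list.txt created above:
wc -l < list.txt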

Pipelines The pipe character, |, allows you to direct the output of one command into the input of another. This can be very powerful. For example, the following pipeline lists the contents of the current directory, searches for the string "test", then counts the number of results. (wc -l counts the number of lines in its input)
ls | grep test | wc -l
For a better, but even more contrived example, say you have a file myfile, with a bunch of lines of potentially duplicated and unsorted data
test
test
1234
4567
1234
You can sort it and output only the unique lines with sort and uniq:
$ sort < myfile | uniq
1234
4567
test

Save Yourself Some Typing: Globs and Tab-Completion Sometimes you don't want to type out the whole filename when writing out a command. The shell can help you here by autocompleting when you press the tab key. If you have a whole bunch of files with the same suffix, you can refer to them when writing arguments as *.suffix. This also works with prefixes, prefix*, and in fact you can put a * anywhere, *middle*. The shell will expand that * into all the files in that directory that match your criteria (ending with a specific suffix, starting with a specific prefix, and so on) and pass each file as a separate argument to the command. For example, if I have a series of files called 1.txt, 2.txt, and so on up to 9, each containing just the number for which it's named, I could use cat to output all of them like so:
jacob@lovelace/tmp/numbers$ ls
1.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt	9.txt
jacob@lovelace/tmp/numbers$ cat *.txt
1
2
3
4
5
6
7
8
9
Also the ~ shorthand mentioned above that refers to your home directory can be used when passing a path as an argument to a command.

Ifs and For loops The files in the above example were generated with the following shell commands:
for i in 1 2 3 4 5 6 7 8 9
do
echo $i > $i.txt
done
But I'll have to save variables, conditionals and loops for another day because this is already too long. Needless to say, the shell is a full programming language, although a very ugly and dangerous one.

14 August 2020

Markus Koschany: My Free Software Activities in July 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in August) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
Debian Java Misc Debian LTS This was my 53rd month as a paid contributor and I have been paid to work 15 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 26th month and I have been paid to work 13.25 hours on ELTS. Thanks for reading and see you next time.

14 July 2020

Markus Koschany: My Free Software Activities in June 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in July) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games Short news
Debian Java Misc Debian LTS This was my 52nd month as a paid contributor and I have been paid to work 60 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: Thanks for reading and see you next time.

11 June 2020

Markus Koschany: My Free Software Activities in May 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in June) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
Debian Java Misc Debian LTS This was my 51st month as a paid contributor and I have been paid to work 25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 "Wheezy". This was my 24th month and I have been paid to work 9.25 hours on ELTS. Thanks for reading and see you next time.

22 April 2020

Jonathan Dowland: SUPERHOT

Continuing a series of blog posts about casual Nintendo Switch games, next in the series is SUPERHOT. Normally 19.99, I picked it up for 13.99 in a sale. That's a little bit more than I would usually pay for a casual game. SUPERHOT first came on my radar because someone I know from a baby group worked on their VR port in some capacity.
Slow-motion buckshot
A first-person shooter, SUPERHOT's USP is that time only progresses when you move (well, nearly). Time is slowed to a complete crawl when you are not moving. The game's visual style is very distinctive: almost everything is a washed-out grey or white colour with a porcelain-like texture, except weapons and objects you can interact with, which are a matt black, and enemies, which are a bright red. It reminds me a lot of the 1992 Amiga game Robocop 3.
Robocop 3
The play-style is very reminiscent of the "Bullet Time" sequences in The Matrix: seemingly impossibly overwhelming odds, deftly manoeuvred through thanks to superhuman reaction times. The game has a relatively short campaign of little vignettes, linked together by a cyberpunk narrative. The game is sometimes criticised for the short campaign, but for me that's ideal. And the vignettes being short and quite standalone suits my play requirements very well.
Amiga easter-egg
The narrative interspersed between the play scenarios is a little bit over-long, and you can spend an unreasonable amount of time bashing buttons to get through it. Despite that it's a moderately interesting story. Once you've beaten the campaign, you can go back and play any of the scenarios again, or try the newly unlocked endless mode. I haven't tried that yet. The original prototype for the game is a free-to-play in-browser demo, available here. On Windows PC, there's a sequel-of-sorts in the works called MIND CONTROL DELETE with a lot of new features to add replay value.

11 March 2020

Markus Koschany: My Free Software Activities in February 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in March) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games Debian Java Misc Debian LTS This was my 48th month as a paid contributor and I have been paid to work 10 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 "Wheezy". This was my 21st month and I have been paid to work 8 hours on ELTS. Thanks for reading and see you next time.

8 March 2020

Ulrike Uhlig: Implementing feedback into our work culture

Everywhere I worked in the past, the only feedback that was asked of employees was during a yearly evaluation meeting. These meetings always felt to me like talking to Santa Claus and his Knecht Ruprecht. I was asked: Were you a good employee last year? If yes, we might give you a raise. If no, admit all your mistakes now, even if we already know everything, ho ho ho. And don't you talk about your feelings, or your well-being, or say anything about the organization's (invisible) hierarchies, otherwise we will put you on the "naughty list", and that's it with candy. The yearly evaluation aside, there was no other place to give feedback (except by escalating a matter by involving the Labour Court, if you happen to work in France, or going on strike, also mostly part of French culture). Feedback allows us to reflect on work processes, to situate ourselves, and to get closure. How surprised was I when, some years ago, I received an email from a collaborator asking me kindly for "just few paragraphs (doesn't have to be anything long) to hear from you about the process, your work, challenges you had, or anything else you want to mention there." Wow!
This simple email allowed me to reflect: How do we get to a feedback culture? How do we get from German Christmas folklore, protestant work ethics, and the deeply rooted principles of disciplining and punishing to a feedback culture on eye level? It sounds a bit like going from the dark ages to a really cool science fiction utopia with universal peace, telepathy, and magic between all sentient beings on all inhabited planets in the cosmos. At least that's how I imagined it as a child, just like some of my heroes did: the cosmonaut girl who saves Earth, the boy who talks to space flowers that give him the capacity to fly, and the little onion who fights for justice (the Italian author was so popular on our side of the iron curtain that a Soviet astronomer named a minor planet after him; his wife meanwhile immortalized Karl Marx). And some romantic part of me hangs on to these ideas. Feedback is not always easy to hear and to give.

I-Statements Giving and receiving feedback is hard in a culture where people learnt that when they made a mistake they wouldn't get candy. Or that they have to constantly please other people because they are not worthy by themselves. This can lead to people pinning mistakes on one another. Every sentence that starts with "You are..." has the potential of creating a lot of hurt and anger. Have you heard of I-Statements? They have very powerfully changed my world view, as they shift from accusation to ownership of feelings. So instead of telling someone "Your writing style is impossible! You really need to change the way you write.", with an I-Statement one could say "I have a hard time understanding that part of the text." I-Statements make cooperation possible.

Listening actively Feedback is not about being right or wrong; it's first of all about being able to see how another person has experienced a situation. Active listening is a tool that helps with understanding. It might seem easy, but it needs quite some practice and a safe space. One part of active listening is to restate what you hear the other person say (by mirroring, or paraphrasing), to make sure you understood, and to make sure they know you understood what they were trying to say. You can practise this: in a circle of three people, have one person tell how they experienced a (possibly conflictual) situation, have one person do the active listening, and the third person observing in order to give feedback to the active listener about how they did. Then switch roles, for example clockwise, until everyone has had every role.

Encouraging continuous feedback A working feedback culture does not take place only once a year. It needs to be a continuous process and therefore implemented in meetings, teams, and eventually on the level of a project. Making clear "Who can I talk to if I experience an issue?" is no different from telling developers and users where and how they can report a bug or request a feature. A safe space to express feedback is key.

Encouraging multiple feedback channels Some people might feel less empowered or more vulnerable over one channel than others. Make sure to have different channels for receiving feedback such as email, a point on each meeting agenda, a one-to-one meeting, or a poll.

Giving and receiving feedback on eye level In a workplace that does not have a working feedback culture, feedback is easily perceived as policing.
If your feedback process consists of asking people to upload a form to a cloud server every 3 months, and you notice that some people don't do it, you could ask yourself if there is an issue with how your colleagues perceive giving feedback in your organization. Do you meet your colleagues on eye level when it comes to feedback? Do you take feedback seriously and act on it? How do you deal with unpleasant feedback? How do you react when colleagues don't meet your expectations? Can people participate in the feedback process within their paid work time? Did everybody understand what the feedback process is about?

Don't jump to conclusions Humans are problem-solving animals. When someone comes to us with a problem, the first thing we want to do is to solve it, to help them. But sometimes this is uncalled for; it can be disempowering, or prevent people from acquiring competences themselves, and it can even break people's boundaries. So instead of asking "What can I do for you?", try asking "What do you need right now?" People will often reply something that you did not expect at all.

Acting on feedback Make sure you have a process to collect feedback (possibly anonymized) and to regularly evaluate whether the organization needs to implement changes to thrive.

Conclusion I stumbled upon Hans-Christian Dany's critique of feedback again recently, therefore I need to make it clear: I'm not interested in improving capitalist work culture by using cybernetic principles of self-regulation through feedback. Instead, I am interested in improving cooperation between people who work either individually or in organizations, on eye level. In this framework, I see feedback processes as profoundly anti-capitalist methods to improve cooperation while working towards the common good. Implementing these ideas should be doable: there are organizations that provide feedback training, for example. This document, initially aimed at people in cooperatives, gives many insights on communication skills and feedback, and the agile and UX worlds do feedback "retrospectives". And otherwise, I'll have to go and write science fiction stories for children myself.

10 October 2017

Carl Chenet: The Slack Threat

During a long era, electronic mail was the main communication tool for enterprises. Slack, which offers public or private group discussion boards and instant messaging between two people, challenges its position, especially in the IT industry. Not only does Slack have features known and used since the launch of IRC in the late 80s, but Slack also offers file sending and sharing, code quoting, and it indexes everything that goes through the application for later searches. Slack is also modular, with numerous plug-ins to easily add new features. Using the Software-As-A-Service (SAAS) model, Slack's basic version is free, and users pay for options. Slack is now considered by the Github generation as the new main enterprise communication tool. As I did in my previous article on the Github threat, this one won't promote Slack's advantages, as many other articles have already covered all these points ad nauseam, but will show the other side and warn the companies using this service about its inherent risks. So far, these risks have been ignored, sometimes voluntarily, in the name of the "It works" ideology, neglecting all economic and safety considerations, and all threats to privacy and individual freedom. We'll discuss them below.

Github, a software forge as a SAAS, with all the advantages but also all the risks of its economic model

All your company communication since its creation When a start-up chooses Slack, all of its internal communication will be stored by Slack. When someone uses this service, simply chatting through it means that the whole communication is archived. One may point out that within the basic Slack offer, only the last 10,000 messages can be read and searched. Bad argument. Slack stores every message and every shared file as it pleases. We'll see below that this application behavior is of capital importance in the Slack threat to enterprises. And the problem is the same for all other companies which choose Slack at one point or another. If they replace their traditional communication method with it, Slack will have access to capital data, not only in volume, but also because of its value for the company itself, or for anyone interested in this company's life.

Search Your Entire Archive One of the main arguments to use Slack is its "Search your entire archive" feature. One can search almost anything one can think of. Why? Because everything is indexed. Your team chat archive or the more or less confidential documents exchanged with the accounting department; everything is in it in order to provide the most effective search tool.

The search bar, well-known by Slack users

We can't deny it's a very attractive feature for everyone inside the company. But it is also a very attractive feature for everyone outside of the company who would want to know more about its internal life. Even more so if you're looking for a specific subject. If Slack is the main communication tool of your company, and if, as I've experienced in my professional life, some teams prefer to use it rather than going to the office next door, or even bug you to put the information on the dedicated channel, one can easily deduce that nothing in this type of company escapes Slack. The automatic indexation and the efficiency of the search feature are excellent tools to get all the information needed, in quantity and in quality. As such, it's a great social engineering tool for everyone who has access to it, with a history as old as the use of Slack as a communication tool in the company.

Across Borders And Beyond! Slack is a Web service which mainly uses Amazon Web Services, and most especially CloudFront, as stated by the available information on Slack's infrastructure. Even without a complete study of said infrastructure, it's easy to state that all the data regarding many innovative global companies around the world (for some of them including all of their internal communication since their creation) are located in the United States, or at least in the hands of a US company, which must follow US laws, a country with a well-known history of large-scale industrial espionage, as the whistleblower Edward Snowden demonstrated in 2013, and where company data access has no restriction under the Patriot Act, as in the Microsoft case (2014) where data stored in Ireland by the Redmond software editor was handed over to US authorities.

Edward Snowden, an individual and corporate freedom fighter

As such, Slack's automatic indexation and search tool are a boon for any spy agency or hacker that gains access to it. To trust a third party with all, or at least most of, your internal corporate communication is a certain risk for your company if the said third party doesn't follow the same regulations as yours or if it has different interests, from a data security point of view or more globally regarding its competitiveness. A badly timed data leak can be catastrophic. What's the point of secretly preparing a new product launch or an aggressive takeover if all your recent Slack conversations have leaked, including your secret plans?

What if Slack is hacked? First, let's remember that even if a cyber attack may appear as a rare or hypothetical scenario to a badly informed and hurried manager, it is far from being as rare as she or he believes (or wants to believe). Infrastructure hacking is quite common, as a regular visit to Hacker News will give you multiple evidence of. And Slack itself has already been hacked. In February 2015, Slack was the victim of a four-day cyber attack, which was made public by the company in March. Officially, the unauthorized access was limited to information on the users' profiles. It is impossible to measure exactly what and who was impacted by this attack. In a recent announcement, Yahoo confessed that 3 billion accounts (you read that right: 3 billion) were compromised in late 2014!

Yahoo, the company which suffered the largest recorded cyberattack in terms of the number of compromised accounts

Officially, Slack stated that "No financial or payment information was accessed or compromised in this attack." Which is, by far, the least interesting of all the data stored within Slack! With company internal communication indexed, sometimes from the very beginning of said company, and searchable, Slack may be a potential target for cybercriminals not looking for its users' financial credentials but rather for their internal data, already in a usable format. One can imagine Slack must give information on a massive data leak, which can't be ignored. But what would happen if only one Slack user is the victim of said leak?

The Free Alternative Solutions As we demonstrated above, companies need to find an alternative solution to Slack, one they can host themselves to reduce data leaks, industrial espionage, and dependency on the Internet connection. Luckily, Slack's success created its own copycats, some of them being also free software. Rocket.Chat is one of them. Its comprehensive service offers chat rooms, direct messages and file sharing, but also videoconferencing and screen sharing, and even more features. Check their dedicated page. You can also try an online demo. And even more, Rocket.Chat has a very simple extension system and an API. Mattermost is another service, which has the advantage of being close to, and compatible with, Slack. It offers numerous features, including the main ones expected of this type of software. It also offers numerous apps and plug-ins to interact with online services, software forges, and continuous integration tools.

It works In the introduction, we discussed the "It works" effect, usually invoked to dispel any arguments about data protection and exchange confidentiality that we discussed in this article. True, a single developer can ask: why worry about it? All I want is to chat with my colleagues and send files! Because a Slack service subscription in the long term puts the company continuously at risk. Maybe it's not the employees' place to worry about it; they just have to do their job as efficiently as possible. On the other side, the company management, usually non-technical, may not be aware of what risks will threaten their company with this technical choice. The technical management may pretend to be omniscient, but nobody is fooled. Either someone from the management will ask the right question (where are our data and who can access them?) or someone from the technical side must officially alert them to these problems. It is this technical audience, even if not always heard by their management, which is the target of this article. May they find in it the right arguments to be convincing. We hope that the several points we developed in this article will help you to make the right choice.

About Me Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker. Follow me on social networks. Translated from French by Stéphanie Chaptal. Original article written in October 2016.

25 September 2017

Chris Lamb: Lintian: We are all Perl developers now

Lintian is a static analysis tool for Debian packages, reporting on various errors, omissions and general quality-assurance issues to maintainers. I've previously written about my exploits with Lintian as well as authoring a short tutorial on how to write your own Lintian check. Anyway, I recently uploaded version 2.5.53, about two months since the previous release. The biggest changes you may notice are support for the latest version of the Debian Policy as well as the addition of checks to encourage the migration to Python 3. Thanks to all who contributed patches, code review and bug reports to this release. The full changelog is as follows:
lintian (2.5.53) unstable; urgency=medium
  The "we are all Perl developers now" release.
  * Summary of tag changes:
    + Added:
      - alternatively-build-depends-on-python-sphinx-and-python3-sphinx
      - build-depends-on-python-sphinx-only
      - dependency-on-python-version-marked-for-end-of-life
      - maintainer-script-interpreter
      - missing-call-to-dpkg-maintscript-helper
      - node-package-install-in-nodejs-rootdir
      - override-file-in-wrong-package
      - package-installs-java-bytecode
      - python-foo-but-no-python3-foo
      - script-needs-depends-on-sensible-utils
      - script-uses-deprecated-nodejs-location
      - transitional-package-should-be-oldlibs-optional
      - unnecessary-testsuite-autopkgtest-header
      - vcs-browser-links-to-empty-view
    + Removed:
      - debug-package-should-be-priority-extra
      - missing-classpath
      - transitional-package-should-be-oldlibs-extra
  * checks/apache2.pm:
    + [CL] Fix an apache2-unparsable-dependency false positive by allowing
      periods (".") in dependency names.  (Closes: #873701)
  * checks/binaries.pm:
    + [CL] Apply patches from Guillem Jover & Boud Roukema to improve the
      description of the binary-file-built-without-LFS-support tag.
      (Closes: #874078)
  * checks/changes.{pm,desc}:
    + [CL] Ignore DFSG-repacked packages when checking for upstream
      source tarball signatures as they will never match by definition.
      (Closes: #871957)
    + [CL] Downgrade severity of orig-tarball-missing-upstream-signature
      from "E:" to "W:" as many common tools do not make including the
      signatures easy enough right now.  (Closes: #870722, #870069)
    + [CL] Expand the explanation of the
      orig-tarball-missing-upstream-signature tag to include the location
      of where dpkg-source will look. Thanks to Theodore Ts'o for the
      suggestion.
  * checks/copyright-file.pm:
    + [CL] Address a number of issues in copyright-year-in-future:
      - Prevent false positives in port numbers, email addresses, ISO
        standard numbers and matching specific and general street
        addresses.  (Closes: #869788)
      - Match all violating years in a line, not just the first (eg.
        "2000-2107").
      - Ignore meta copyright statements such as "Original Author". Thanks
        to Thorsten Alteholz for the bug report.  (Closes: #873323)
      - Expand testsuite.
  * checks/cruft.{pm,desc}:
    + [CL] Downgrade severity of file-contains-fixme-placeholder
      tag from "important" (ie. "E:") to "wishlist" (ie. "I:").
      Thanks to Gregor Herrmann for the suggestion.
    + [CL] Apply patch from Alex Muntada (alexm) to use "substr" instead
      of "substring" in mentions-deprecated-usr-lib-perl5-directory's
      description.  (Closes: #871767)
    + [CL] Don't check copyright_hints file for FIXME placeholders.
      (Closes: #872843)
    + [CL] Don't match quoted "FIXME" variants as they are almost always
      deliberate. Thanks to Adrian Bunk for the report.  (Closes: #870199)
    + [CL] Avoid false positives in missing source checks for "CSS Browser
      Selector".  (Closes: #874381)
  * checks/debhelper.pm:
    + [CL] Prevent a false positive of
      missing-build-dependency-for-dh_-command that can be exposed by
      following the advice for the recently added
      useless-autoreconf-build-depends tag.  (Closes: #869541)
  * checks/debian-readme.{pm,desc}:
    + [CL] Ensure readme-debian-contains-debmake-template also checks
      for templates "Automatically generated by debmake".
  * checks/description.{desc,pm}:
    + [CL] Clarify explanation of description-starts-with-leading-spaces
      tag. Thanks to Taylor Kline  for the report
      and patch.  (Closes: #849622)
    + [NT] Skip capitalization-error-in-description-synopsis for
      auto-generated packages (such as dbgsym packages).
  * checks/fields.{desc,pm}:
    + [CL] Ensure that python3-foo packages have "Section: python", not
      just python2-foo.  (Closes: #870272)
    + [RG] Do no longer require debug packages to be priority extra.
    + [BR] Use Lintian::Data for name/section mapping
    + [CL] Check for packages including "?rev=0&sc=0" in Vcs-Browser.
      (Closes: #681713)
    + [NT] Transitional packages should now be "oldlibs/optional" rather
      than "oldlibs/extra".  The related tag has been renamed accordingly.
  * checks/filename-length.pm:
    + [NT] Skip the check on auto-generated binary packages (such as
      dbgsym packages).
  * checks/files.{pm,desc}:
    + [BR] Avoid privacy-breach-generic false positives for legal.xml.
    + [BR] Detect install of node package under /usr/lib/nodejs/[^/]*$
    + [CL] Check for packages shipping compiled Java class files. Thanks
      Carnë Draug.  (Closes: #873211)
    + [BR] Privacy breach is no longer experimental.
  * checks/init.d.desc:
    + [RG] Do not recommend a versioned dependency on lsb-base in
      init.d-script-needs-depends-on-lsb-base.  (Closes: #847144)
  * checks/java.pm:
    + [CL] Additionally consider .cljc files as code to avoid false-
      positive codeless-jar warnings.  (Closes: #870649)
    + [CL] Drop problematic missing-classpath check.  (Closes: #857123)
  * checks/menu-format.desc:
    + [CL] Prevent false positives in desktop-entry-lacks-keywords-entry
      for "Link" and "Directory" .desktop files.  (Closes: #873702)
  * checks/python.{pm,desc}:
    + [CL] Split out Python checks from "scripts" check to a new, source
      check of type "source".
    + [CL] Check for python-foo without corresponding python3-foo packages
      to assist in Python 2.x deprecation.  (Closes: #870681)
    + [CL] Check for packages that Build-Depend on python-sphinx only.
      (Closes: #870730)
    + [CL] Check for packages that alternatively Build-Depend on the
      Python 2 and Python 3 versions of Sphinx.  (Closes: #870758)
    + [CL] Check for binary packages that depend on Python 2.x.
      (Closes: #870822)
  * checks/scripts.pm:
    + [CL] Correct false positives in
      unconditional-use-of-dpkg-statoverride by detecting "if !" as a
      valid shell prefix.  (Closes: #869587)
    + [CL] Check for missing calls to dpkg-maintscript-helper(1) in
      maintainer scripts.  (Closes: #872042)
    + [CL] Check for packages using sensible-utils without declaring a
      dependency after its split from debianutils.  (Closes: #872611)
    + [CL] Warn about scripts using "nodejs" as an interpreter now that
      nodejs provides /usr/bin/node.  (Closes: #873096)
    + [BR] Add a statistic tag giving interpreter.
  * checks/testsuite.{desc,pm}:
    + [CL] Remove recommendations to add a "Testsuite: autopkgtest" field
      to debian/control as it is added when needed by dpkg-source(1)
      since dpkg 1.17.1.  (Closes: #865531)
    + [CL] Warn if we see an unnecessary "Testsuite: autopkgtest" header
      in debian/control.
    + [NT] Recognise "autopkgtest-pkg-go" as a valid test suite.
    + [CL] Recognise "autopkgtest-pkg-elpa" as a valid test suite.
      (Closes: #873458)
    + [CL] Recognise "autopkgtest-pkg-octave" as a valid test suite.
      (Closes: #875985)
    + [CL] Update the description of unknown-testsuite to reflect that
      "autopkgtest" is not the only valid value; the referenced URL
      is out-of-date (filed as #876008).  (Closes: #876003)
  * data/binaries/embedded-libs:
    + [RG] Detect embedded copies of heimdal, libgxps, libquicktime,
      libsass, libytnef, and taglib.
    + [RG] Use an additional string to detect embedded copies of
      openjpeg2.  (Closes: #762956)
  * data/fields/name_section_mappings:
    + [BR] node- package section is javascript.
    + [CL] Apply patch from Guillem Jover to add more section mappings.
      (Closes: #874121)
  * data/fields/obsolete-packages:
    + [MR] Add dh-systemd.  (Closes: #872076)
  * data/fields/perl-provides:
    + [CL] Refresh perl provides.
  * data/fields/virtual-packages:
    + [CL] Update data file from archive. This fixes a false positive for
      "bacula-director".  (Closes: #835120)
  * data/files/obsolete-paths:
    + [CL] Add note to /etc/bash_completion.d entry regarding stricter
      filename requirements.  (Closes: #814599)
  * data/files/privacy-breaker-websites:
    + [BR] Detect custom donation logos like apache.
    + [BR] Detect generic counter website.
  * data/standards-version/release-dates:
    + [CL] Add 4.0.1 and 4.1.0 as known standards versions.
      (Closes: #875509)
  * debian/control:
    + [CL] Mention Debian Policy v4.1.0 in the description.
    + [CL] Add myself to Uploaders.
    + [CL] Drop unnecessary "Testsuite: autopkgtest"; this is implied from
      debian/tests/control existing.
  * commands/info.pm:
    + [CL] Add a --list-tags option to print all tags Lintian knows about.
      Thanks to Rajendra Gokhale for the suggestion.  (Closes: #779675)
  * commands/lintian.pm:
    + [CL] Apply patch from Maia Everett to avoid British spelling when
      using en_US locale.  (Closes: #868897)
  * lib/Lintian/Check.pm:
    + [CL] Stop emitting {maintainer,uploader}-address-causes-mail-loops
      for @packages.debian.org addresses.  (Closes: #871575)
  * lib/Lintian/Collect/Binary.pm:
    + [NT] Introduce an "auto-generated" argument for "is_pkg_class".
  * lib/Lintian/Data.pm:
    + [CL] Modify Lintian::Data's "all" to always return keys in insertion
      order, dropping dependency on libtie-ixhash-perl.
  * helpers/coll/objdump-info-helper:
    + [CL] Apply patch from Steve Langasek to accommodate binutils 2.29
      outputting symbols in a different format on ppc64el.
      (Closes: #869750)
  * t/tests/fields-perl-provides/tags:
    + [CL] Update expected output to match new Perl provides.
  * t/tests/files-privacybreach/*:
    + [CL] Add explicit test for packages including external fonts via
      the Google Font API. Thanks to Ian Jackson for the report.
      (Closes: #873434)
    + [CL] Add explicit test for packages including external fonts via
      the Typekit API via <script/> HTML tags.
  * t/tests/*/desc:
    + [CL] Add missing entries in "Test-For" fields to make
      development/testing workflow less error-prone.
  * private/generate-tag-summary:
    + [CL] git-describe(1) will usually emit 7 hexadecimal digits as the
      abbreviated object name. However, as this can be user-dependent,
      pass --abbrev=0 to ensure it does not vary between systems.  This
      also means we do not need to strip it ourselves.
  * private/refresh-*:
    + [CL] Use deb.debian.org as the default mirror.
    + [CL] Update locations of Contents-<arch> files; they are now
      namespaced by distribution (eg. "main").
 -- Chris Lamb <lamby@debian.org>  Wed, 20 Sep 2017 09:25:06 +0100

18 September 2017

Carl Chenet: The Github threat

Many voices arise now and then against the risks linked to the use of Github by Free Software projects. Yet the infatuation with the collaborative forge of the Octocat's Californian start-up doesn't seem to fade away.

In recent years, Github and its services have taken on an important role in software engineering: they are seen as easy to use, efficient for the daily workload, and offering interesting features both for enterprise collaborative workflows and for Free Software projects. What are the arguments against using its services, and are they valid? We will list them first, then examine their validity.

1. Critical points

1.1 Centralization

The Github application belongs to a single entity, Github Inc., a US company which manages it alone. So a single company under US legislation controls access to most Free Software source code, which may become a problem for the groups using it if a source repository is no longer available, for political or technical reasons.

The Octocat, the Github mascot

This centralization leads to another problem: having reached critical mass, it becomes more and more difficult not to have a Github account. People who don't use Github, by choice or not, are becoming a silent minority. It is now fashionable to use Github, and not doing so is seen as being out of date. The same phenomenon is a classic, and even the norm, for proprietary social networks (Facebook, Twitter, Instagram).

1.2 Proprietary software

When you interact with Github, you are using proprietary software, with no access to its source code, and which may not work the way you think it does. This is a problem at several levels: first ideologically, but foremost in practice. In the Github case, we send them code that we can still control outside of their interface. We also send them personal information (profile, Github interactions). And above all, Github forces any project hosted on the US platform to use a crucial proprietary tool: its bug tracking system.

Windows, the epitome of proprietary software, even if others took the same path

1.3 Uniformization

Working with the Github interface seems easy and intuitive to most. Lots of companies now use it as a source repository, and many developers leaving a company find the same Github working environment at the next one. This pervasive presence of Github in the free software development environment is part of the uniformization of developers' working space.

Uniforms always bring the Army to my mind, here the Clone army

2 Critical points cross-examination

2.1 Regarding the centralization

2.1.1 Service availability rate

As said above, Github is nowadays the main repository of Free Software source code. As such, it is a favorite target for cyberattacks. DDoS attacks hit it in March and August 2015. On December 15, 2015, an outage led to the inaccessibility of 5% of the repositories. The same occurred on November 15. And these are only the incidents reported by Github itself. One can imagine that the platform's mean outage rate is underestimated.

2.1.2 Chain reaction could block Free Software development

Today many dependency management tools, such as npm for JavaScript, Bundler for Ruby or pip for Python, can fetch an application's source code directly from Github. As Free Software projects become more and more interlinked and codependent, if one component is down, the whole development process stops.
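For instance, a Python project can point pip directly at a Github repository from its requirements file; the package name and URL below are hypothetical, but if that repository disappears, every install and CI run depending on it breaks:

# requirements.txt -- dependency pulled straight from Github instead of PyPI
# (hypothetical repository: if this URL goes away, `pip install -r requirements.txt` fails)
somelib @ git+https://github.com/example/somelib.git@v1.2.0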

One of the best examples is the npmgate. Any company can legally demand that Github take down some source code from its repositories, which could create a chain reaction blocking the development of many Free Software projects, as the Node.js community suffered from the decisions of Npm, Inc., the company managing npm.

2.2 A historical precedent: SourceForge

Github didn't appear out of the blue. In its time, its predecessor, SourceForge, was also extremely popular.

Heavily centralized, and based on strong interaction with the community, SourceForge is now seen as an aging SaaS (Software as a Service) and sees most of its customers fleeing to Github, which creates lots of hurdles for those who stayed. The Gimp project suffered from spam and terrible advertising, which led to the departure of the VLC project, then from installers bundled with adware being served instead of the official Gimp installer for Windows. And finally, the Gimp project's SourceForge account was hijacked by the SourceForge team itself!

These are very recent examples of what a commercial entity can do when it is under pressure from its stakeholders. It is vital to really understand what it means to trust them with the centralization of data and exchanges, which could have tremendous repercussions on the day-to-day life and habits of the Free Software and open source community.

2.3. Regarding proprietary software

2.3.1 One community, several opinions on proprietary software

Mostly based on ideology, this point deals with the definition every member of the community gives to Free Software and open source, mostly about one thing: is it viral or not? Or GPL vs. MIT/BSD.

Those on the side of viral Free Software will have trouble using proprietary software, as the latter shouldn't even exist. It must be assimilated, to quote Star Trek, since it is a connected black box, endangering privacy, corrupting our usage for profit, restraining our freedom to use what we own as we please, etc.

Those on the side of complete freedom have no qualms about using proprietary software, as its very existence is a consequence of freedom without restriction. They even accept that code they developed may become part of proprietary software, which is quite a common occurrence. This part of the Free Software community has no qualms about using Github, which sits well within its ideological parameters. Just take a look at the Janson amphitheater during FOSDEM and count how many Apple laptops running macOS are around.

FreeBSD, the main BSD project under the BSD license

2.3.2 Data loss and data restrictions linked to proprietary software use

Even without ideological considerations, and just focusing on the Github infrastructure, the bug tracking system is a major problem by itself.

Bug reports build the memory of Free Software projects. They are the entry point for new contributors, the place to find bug reports, requests for new features, etc. The project's history can't be limited to the code alone. It's very common to find bug reports when you copy and paste an error message into a search engine. Their importance is not only historical: they are precious for the project itself, and also for its present and future users.

Github gives the ability to extract bug reports through its API. What would happen if Github went down, or if the platform no longer supported this feature? In my opinion, not many projects have ever thought about this outcome. How could they move all the data generated on Github into a new bug tracking system? One old example is Astrid, a TODO-list application bought by Yahoo a few years ago. Very popular, it grew fast until it was closed overnight, with only a few weeks for its users to extract their data. And it was only a to-do list. The same situation with Github would be tremendously difficult to manage for many projects, if they even had the ability to deal with it. The code would still be available and could live somewhere else, but the project's memory would be lost. A project like Debian has today more than 800,000 bug reports, which are a treasure trove of data about problems solved, feature requests and where the development stands on each of them. The developers of the CPython project anticipated the problem and decided not to use Github's bug tracking system.
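To make this concrete, here is a minimal sketch of what such an export could look like using Github's public REST API (the repository name is hypothetical, and a real migration would also need the comment threads, labels, milestones and cross-references, which live behind separate API calls):

import json
import urllib.request

def fetch_issues(owner, repo, token=None):
    """Download all issues (open and closed) of a repository, page by page."""
    issues, page = [], 1
    while True:
        url = ("https://api.github.com/repos/%s/%s/issues"
               "?state=all&per_page=100&page=%d" % (owner, repo, page))
        headers = {"Accept": "application/vnd.github+json"}
        if token:
            headers["Authorization"] = "token " + token
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            batch = json.load(resp)
        if not batch:          # empty page: we have everything
            break
        issues.extend(batch)
        page += 1
    return issues

# Hypothetical repository; dump the result so it can be re-imported elsewhere.
with open("issues-backup.json", "w") as f:
    json.dump(fetch_issues("example", "example-project"), f, indent=2)

Even with such a dump in hand, the surrounding discussions and their context are exactly the "project memory" this section worries about.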

Issues, the Github proprietary bug tracking system

Another thing we could lose if Github suddenly disappeared: all the work currently done in pull requests (aka PRs). This Github feature gives the ability to clone a project's Github repository, modify it to fit your needs, then offer your own modifications to the original repository. The original repository's owner will then review said modifications and, if he or she agrees with them, merge them into the original repository. As such, it's one of the main advantages of Github, since it can be done easily through its graphical interface.

However, reviewing all the PRs may take quite a long time, and most successful projects have several ongoing PRs at any moment. These PRs and/or the proprietary bug tracking system are also commonly used as a platform for comments and discussion between developers.

The code itself is not lost if Github goes down (except in one specific situation, as seen below), but the peer-review work materialized in the PRs and in the bug tracking system is lost. Let's remember that the PR mechanism lets you clone and modify a project and then generate PRs directly from Github's proprietary web interface, without downloading a single line of code to your computer. In this particular case, if Github goes down, all that code and work in progress is lost. Some also use Github as a bookmarking place: they follow their favorite projects' activity through the Watch function. This style of technology watch would also be lost if Github went down.

Debian, one of the main Free Software projects with at least a thousand official contributors

2.4 Uniformization

The Free Software community is walking a tightrope between the standardization needed for easier interoperability between its products and an attraction to novelty driven by a strong need to differentiate from what is already there.

Github popularized the use of Git, a great tool now used across various sectors far from its original programming field. Step by step, Git has become so prominent that it's almost impossible to even think of another source control manager, even though awesome alternative solutions, unfortunately not as popular, exist, such as Mercurial.

A new Free Software project is now a Git repository on Github with a README.md as a quick description. All the other solutions are ostracized: few if any potential contributors would even notice such projects. It now seems very difficult to ask potential contributors to learn a new source control manager AND a new forge for every project they want to contribute to, which was a basic requirement a few years ago. It's quite sad, because Github, by offering a uniform experience to its users, cuts them off from a whole realm of possibilities. Maybe Github is one of the best web-based version control services. But being the main one doesn't leave room for a new competitor to grow, and it lets Github initiate development newcomers into a narrow set of features, totally unrelated to the strength of the Git tool itself.

3. Centralization, uniformization, proprietary software... What's next? Laziness?

The fight against centralization is a core part of the Free Software ideology, as centralization strengthens the power of those who manage it and who, through it, control those who are managed by it. The allergy to uniformization, born as a reaction to the main software companies and their wish to impose a closed, commercial software world, was for a long time the main fuel for the thirst for innovation and the development of intelligent alternatives. As we said above, part of the Free Software community was built as a reaction to proprietary software and its threat. The other part, without hoping for its disappearance, still chose a development model opposite to proprietary software, at least in the beginning, as there are now more and more bridges between the two.

The Github effect is a morbid one because of its consequences: centralization, uniformization, and the use of proprietary software, at least as its bug tracking system. But some years ago the Dear Github buzz showed one more side effect, one I'd never thought about: laziness. For those who don't know what it is about, this letter is a complaint from spokespersons of several Free Software projects demanding that the Github team finally implement, after years of polite asking, new features. Since when do Free Software projects facing a roadblock ask for clemency instead of building the path they need themselves? When Torvalds was caught in the Bitkeeper problem and the Linux kernel development team could no longer use their revision control software, he developed Git. The mere fact of not being able to use a tool, or of missing features, is the main motivation to seek alternative solutions and, as such, the main motivation of the Free Software movement. Every Free Software community member able to code should have this reflex. You don't like what Github offers? Switch to Gitlab. You don't like Gitlab? Improve it or build your own solution.

The Gitlab logo

Let's be crystal clear: I've never said that every blocked Free Software developer should code his or her own alternative. We all have our own priorities, and some of us even like our beauty sleep, me included. But seeing that this open letter to Github has 1340 names attached to it, among them spokespersons for major Free Software projects, showed me that the need, willpower and strength to code a replacement are there. Maybe such a replacement will be born from this letter; that would be the best outcome of this buzz.

In the end, Github usage is just another example of the massification of Internet usage. Just as Internet users flock to massively centralized social networks such as Facebook or Twitter, developers are following the same path with Github. Even if a large fraction of developers realize the threat linked to this centralized and proprietary organization, the whole community is following this centralization and uniformization trend. The Github service is useful, free or reasonably priced (depending on the features you need), easy to use and up most of the time. Why would we try something else? Maybe because others are using us while we savor the convenience? The Free Software community seems quite sleepy to me.

The lion enjoying the warmth of the hearth

About Me: Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker. Follow me on social networks. Translated from French by Stéphanie Chaptal. Original article written in 2015.

18 June 2017

Hideki Yamane: Debian9 release party in Tokyo

We celebrated the Debian 9 "stretch" release in Tokyo (thanks to Cybozu, Inc. for providing the venue).








We enjoyed beer, wine, sake, soft drinks, pizza, sandwiches, snacks and cake & coffee (a Nicaraguan one, which reminded me of DebConf12 :)

13 April 2017

Antoine Beaupré: New approaches to network fast paths

With the speed of network hardware now reaching 100 Gbps and distributed denial-of-service (DDoS) attacks going in the Tbps range, Linux kernel developers are scrambling to optimize key network paths in the kernel to keep up. Many efforts are actually geared toward getting traffic out of the costly Linux TCP stack. We have already covered the XDP (eXpress Data Path) patch set, but two new ideas surfaced during the Netconf and Netdev conferences held in Toronto and Montreal in early April 2017. One is a patch set called af_packet, which aims at extracting raw packets from the kernel as fast as possible; the other is the idea of implementing in-kernel layer-7 proxying. There are also user-space network stacks like Netmap, DPDK, or Snabb (which we previously covered). This article aims at clarifying what all those components do and to provide a short status update for the tools we have already covered. We will focus on in-kernel solutions for now. Indeed, user-space tools have a fundamental limitation: if they need to re-inject packets onto the network, they must again pay the expensive cost of crossing the kernel barrier. User-space performance is effectively bounded by that fundamental design. So we'll focus on kernel solutions here. We will start from the lowest part of the stack, the af_packet patch set, and work our way up the stack all the way up to layer-7 and in-kernel proxying.

af_packet v4 John Fastabend presented a new version of a patch set that was first published in January regarding the af_packet protocol family, which is currently used by tcpdump to extract packets from network interfaces. The goal of this change is to allow zero-copy transfers between user-space applications and the NIC (network interface card) transmit and receive ring buffers. Such optimizations are useful for telecommunications companies, which may use it for deep packet inspection or running exotic protocols in user space. Another use case is running a high-performance intrusion detection system that needs to watch large traffic streams in realtime to catch certain types of attacks. Fastabend presented his work during the Netdev network-performance workshop, but also brought the patch set up for discussion during Netconf. There, he said he could achieve line-rate extraction (and injection) of packets, with packet rates as high as 30Mpps. This performance gain is possible because user-space pages are directly DMA-mapped to the NIC, which is also a security concern. The other downside of this approach is that a complete pair of ring buffers needs to be dedicated for this purpose; whereas before packets were copied to user space, now they are memory-mapped, so the user-space side needs to process those packets quickly otherwise they are simply dropped. Furthermore, it's an "all or nothing" approach; while NIC-level classifiers could be used to steer part of the traffic to a specific queue, once traffic hits that queue, it is only accessible through the af_packet interface and not the rest of the regular stack. If done correctly, however, this could actually improve the way user-space stacks access those packets, providing projects like DPDK a safer way to share pages with the NIC, because it is well defined and kernel-controlled. According to Jesper Dangaard Brouer (during review of this article):
This proposal will be a safer way to share raw packet data between user space and kernel space than what DPDK is doing, [by providing] a cleaner separation as we keep driver code in the kernel where it belongs.
During the Netdev network-performance workshop, Fastabend asked if there was a better data structure to use for such a purpose. The goal here is to provide a consistent interface to user space regardless of the driver or hardware used to extract packets from the wire. af_packet currently defines its own packet format that abstracts away the NIC-specific details, but there are other possible formats. For example, someone in the audience proposed the virtio packet format. Alexei Starovoitov rejected this idea because af_packet is a kernel-specific facility while virtio has its own separate specification with its own requirements. The next step for af_packet is the posting of the new "v4" patch set, although Miller warned that this wouldn't get merged until proper XDP support lands in the Intel drivers. The concern, of course, is that the kernel would have multiple incomplete bypass solutions available at once. Hopefully, Fastabend will present the (by then) merged patch set at the next Netdev conference in November.
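As a point of reference, the existing copy-based af_packet interface is what today's capture tools build on. The hedged Python sketch below (interface name assumed, root privileges required) opens a raw AF_PACKET socket and reads a single Ethernet frame; each recv() pays the kernel-to-user copy that the v4 patch set aims to eliminate:

import socket

ETH_P_ALL = 0x0003  # from <linux/if_ether.h>: capture every protocol

# A raw AF_PACKET socket: each recv() returns one link-layer frame,
# copied from kernel space into the user-space buffer.
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.bind(("eth0", 0))  # assumed interface name

frame = s.recv(65535)
dst, src = frame[0:6], frame[6:12]
ethertype = int.from_bytes(frame[12:14], "big")
print("%s -> %s, ethertype 0x%04x, %d bytes"
      % (src.hex(), dst.hex(), ethertype, len(frame)))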

XDP updates Higher up in the networking stack sits XDP. The af_packet feature differs from XDP in that it does not perform any sort of analysis or mangling of packets; its objective is purely to get the data into and out of the kernel as fast as possible, completely bypassing the regular kernel networking stack. XDP also sits before the networking stack except that, according to Brouer, it is "focused on cooperating with the existing network stack infrastructure, and on use-cases where the packet doesn't necessarily need to leave kernel space (like routing and bridging, or skipping complex code-paths)." XDP has evolved quite a bit since we last covered it in LWN. It seems that most of the controversy surrounding the introduction of XDP in the Linux kernel has died down in public discussions, under the leadership of David Miller, who heralded XDP as the right solution for a long-term architecture in the kernel. He presented XDP as a fast, flexible, and safe solution. Indeed, one of the controversies surrounding XDP was the question of the inherent security challenges with introducing user-provided programs directly into the Linux kernel to mangle packets at such a low level. Miller argued that whatever protections are expected for user-space programs also apply to XDP programs, comparing the virtual memory protections to the eBPF (extended BPF) verifier applied to XDP programs. Those programs are actually eBPF that have an interesting set of restrictions:
  • they have a limited size
  • they cannot jump backward (and thus cannot loop), so they execute in predictable time
  • they do only static allocation, so they are also limited in memory
XDP is not a one-size-fits-all solution: netfilter, the TC traffic shaper, and other normal Linux utilities still have their place. There is, however, a clear use case for a solution like XDP in the kernel. For example, Facebook and Cloudflare have both started testing XDP and, in Facebook's case, deploying XDP in production. Martin Kafai Lau, from Facebook, presented the tool set the company is using to construct a DDoS-resilience solution and a level-4 load balancer (L4LB), which got a ten-times performance improvement over the previous IPVS-based solution. Facebook rolled out its own user-space solution called "Droplet" to detect hostile traffic and deploy blocking rules in the form of eBPF programs loaded in XDP. Lau demonstrated the way Facebook deploys a three-part chained eBPF program: the first part allows debugging and dumping of packets, the second is Droplet itself, which drops undesirable traffic, and the last segment is the load balancer, which mangles the packets to tweak their destination according to internal rules. Droplet can drop DDoS attacks at line rate while keeping the architecture flexible, which were two key design requirements. Gilberto Bertin, from Cloudflare, presented a similar approach: Cloudflare has a tool that processes sFlow data generated from iptables in order to generate cBPF (classic BPF) mitigation rules that are then deployed on edge routers. Those rules are created with a tool called bpfgen, part of Cloudflare's BSD-licensed bpftools suite. For example, it could create a cBPF bytecode blob that would match DNS queries to any example.com domain with something like:
    bpfgen dns *.example.com
Originally, Cloudflare would deploy those rules to plain iptables firewalls with the xt_bpf module, but this led to performance issues. It then deployed a proprietary user-space solution based on Solarflare hardware, but this has the performance limitations of user-space applications: getting packets back onto the wire involves the cost of re-injecting packets back into the kernel. This is why Cloudflare is experimenting with XDP, which was partly developed in response to the company's problems, to deploy those BPF programs. A concern that Bertin identified was the lack of visibility into dropped packets. Cloudflare currently samples some of the dropped traffic to analyze attacks; this is not currently possible with XDP unless you pass the packets down the stack, which is expensive. Miller agreed that the lack of monitoring for XDP programs is a large issue that needs to be resolved, and suggested creating a way to mark packets for extraction to allow analysis. Cloudflare is currently in a testing phase with XDP and it is unclear if its whole XDP tool chain will be publicly available. While those two companies are starting to use XDP as-is, there is more work needed to complete the XDP project. As mentioned above and in our previous coverage, massive statistics extraction is still limited in the Linux kernel and introspection is difficult. Furthermore, while the existing actions (XDP_DROP and XDP_TX, see the documentation for more information) are well implemented and used, another action may be introduced, called XDP_REDIRECT, which would allow redirecting packets to different network interfaces. Such an action could also be used to accelerate bridges as packets could be "switched" based on the MAC address table. XDP also requires network driver support, which is currently limited. For example, the Intel drivers still do not support XDP, although that should come pretty soon. Miller, in his Netdev keynote, focused on XDP and presented it as the standard solution that is safe, fast, and usable. He identified the next steps of XDP development to be the addition of debugging mechanisms, better sampling tools for statistics and analysis, and user-space consistency. Miller foresees a future for XDP similar to the popularization of the Arduino chips: a simple set of tools that anyone, not just developers, can use. He gave the example of an Arduino tutorial that he followed where he could just look up a part number and get easy-to-use instructions on how to program it. Similar components should be available for XDP. For this purpose, the conference saw the creation of a new mailing list called xdp-newbies where people can learn how to create XDP build environments and how to write XDP programs.
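To give a sense of how small such an XDP program can be, here is a hedged sketch using the BCC toolkit's Python bindings to attach a trivial drop-everything program to an interface; the interface name is an assumption, and real filters such as Droplet obviously inspect the packet before deciding:

import time
from bcc import BPF  # BCC compiles the embedded restricted C at load time

prog = r"""
#include <uapi/linux/bpf.h>
int xdp_drop_all(struct xdp_md *ctx) {
    return XDP_DROP;   /* drop every packet at the driver level */
}
"""

device = "eth0"                      # assumed interface name
b = BPF(text=prog)
fn = b.load_func("xdp_drop_all", BPF.XDP)
b.attach_xdp(device, fn, 0)          # flags=0: let the kernel pick the mode

try:
    print("Dropping all packets on %s; Ctrl-C to detach" % device)
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    b.remove_xdp(device, 0)          # detach and restore normal traffic

Anything more useful than an unconditional XDP_DROP has to fit within the verifier restrictions listed above, which is what keeps such programs safe to load into the kernel.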

In-kernel layer-7 proxying The third approach that struck me as innovative is the idea of doing layer-7 (application) proxying directly in the kernel. This comes from the idea that, traditionally, we build firewalls to segregate traffic and apply controls, but as most services move to HTTP, those policies become ineffective. Thomas Graf presented this idea during Netconf using a Star Wars allegory: what if the Death Star were a server with an API? You would have endpoints like /dock or /comms that would allow you to dock a ship or communicate with the Death Star. Those API endpoints should obviously be public, but then there is this /exhaust-port endpoint that should never be publicly available. In order for a firewall to protect such a system, it must be able to inspect traffic at a higher level than the traditional address-port pairs. Graf presented a design where the kernel would create an in-kernel socket that would negotiate TCP connections on behalf of user space and then be able to apply arbitrary eBPF rules in the kernel.

Graf's design of in-kernel proxying

In this scenario, instead of doing the traditional transfer from Netfilter's TPROXY to user space, the kernel directly decapsulates the HTTP traffic and passes it to BPF rules that can make decisions without doing expensive context switches or memory copies in the case of simply wanting to refuse traffic (e.g. issue an HTTP 403 error). This, of course, requires the inclusion of kTLS to process HTTPS connections. HTTP2 support may also prove problematic, as it multiplexes connections and is harder to decapsulate. This design was described as a "pure pre-accept() hook". Starovoitov also compared the design to the kernel connection multiplexer (KCM). Tom Herbert, KCM's author, agreed that it could be extended to support this, but would require some extensions in user space to provide an interface between regular socket-based applications and the KCM layer. In any case, if the application does TLS (and lots of them do), kTLS gets tricky because it breaks the end-to-end nature of TLS, in effect becoming a man in the middle between the client and the application. Eric Dumazet argued that HA-Proxy already does things like this: it uses splice() to avoid copying too much data around, but it still does a context switch to hand over processing to user space, something that could be fixed in the general case. Another similar project that was presented at Netdev is the Tempesta firewall and reverse-proxy. The speaker, Alex Krizhanovsky, explained that the Tempesta developers have taken one person-month to port the mbed TLS stack to the Linux kernel to allow an in-kernel TLS handshake. Tempesta also implements rate limiting, cookies, and JavaScript challenges to mitigate DDoS attacks. The argument behind the project is that "it's easier to move TLS to the kernel than it is to move the TCP/IP stack to user space". Graf explained that he is familiar with Krizhanovsky's work and he is hoping to collaborate. In effect, the design Graf is working on would serve as a foundation for Krizhanovsky's in-kernel HTTP server (kHTTP). In a private email, Graf explained that:
The main differences in the implementation are currently that we foresee to use BPF for protocol parsing to avoid having to implement every single application protocol natively in the kernel. Tempesta likely sees this less of an issue as they are probably only targeting HTTP/1.1 and HTTP/2 and to some [extent] JavaScript.
Neither project is really ready for production yet. There didn't seem to be any significant pushback from key network developers against the idea, which surprised some people, so it is likely we will see more and more layer-7 intelligence move into the kernel sooner rather than later.

Conclusion All of this work aims at replacing a rag-tag bunch of proprietary solutions that recently came up to bypass the Linux kernel TCP/IP stack and improve performance for firewalls, proxies, and other key edge network elements. The idea is that, unless the kernel improves its performance, or at least provides a way to bypass its more complex code paths, people will work around it. With this set of solutions in place, engineers will now be able to use standard APIs to hook high-performance systems into the Linux kernel.
The author would like to thank the Netdev and Netconf organizers for travel assistance, Thomas Graf for a review of the in-kernel proxying section of this article, and Jesper Dangaard Brouer for review of the af_packet and XDP sections. Note: this article first appeared in the Linux Weekly News.

24 March 2017

Gunnar Wolf: Dear lazyweb: How would you visualize..?

Dear lazyweb, I am trying to get a good way to present the categorization of several cases studied with a fitting graph. I am rating several vulnerabilities / failures according to James Cebula et al.'s paper, A taxonomy of Operational Cyber Security Risks; this is a somewhat deep taxonomy, with 57 end items, but organized in a three-level-deep hierarchy. Copying a table from the cited paper (click to display it full-sized): My categorization is binary: I care only whether it falls within a given category or not. My first stab at this was to represent each case using a star or radar graph. As an example: As you can see, to a "bare" star graph, I added a background color for each top-level category (blue for actions of people, green for systems and technology failures, red for failed internal processes and gray for external events), and printed out only the labels for the second-level categories; for an accurate reading of the graphs, you have to refer to the table and count bars. And, yes, according to the Engineering Statistics Handbook:
Star plots are helpful for small-to-moderate-sized multivariate data sets. Their primary weakness is that their effectiveness is limited to data sets with less than a few hundred points. After that, they tend to be overwhelming.
I strongly agree with the above statement. And stating that "a few hundred points" can be understood is even an overstatement; 50 points are just too much. Now, trying to increase usability for this graph, I came across the Sunburst diagram. One of the proponents for this diagram, John Stasko, has written quite a bit about it. Now... How to create my beautiful Sunburst diagram? That's a tougher one. Even though the page I linked to in the (great!) Data visualization catalogue presents even some free-as-in-software tools to do this... They are Javascript projects that will render their beautiful plots (even including an animation)... To the browser. I need them for a static (i.e. to be printed) document. Yes, I can screenshot and all, but I want them to be automatically generated, so I can review and regenerate them all automatically. Oh, I could just write JSON and use SaaS sites such as Aculocity to do the heavy lifting, but if you know me, you will understand why I don't want to. So... I set out to find a Gunnar-approved way to display the information I need. Now, as the Protovis documentation says, an icicle is simply a sunburst transformed from polar to cartesian coordinates... But I came to a similar conclusion: the tools I found are not what I need. OK, but an icicle graph seems much simpler to produce. I fired up my Emacs, and started writing using Ruby, RMagick and RVG... then decided to try a different way. This is my result so far: So... What do you think? Does this look right to you? Clearer than the previous one? Worse? Do you have any idea on how I could make this better? Oh... You want to tell me there is something odd about it? Well, yes, of course! I still need to tweak it quite a bit. Would you believe me if I told you this is not really a left-to-right icicle graph, but rather a strangely formatted Graphviz non-directed graph using the dot formatter? I can assure you you don't want to look at my Graphviz sources... But in case you insist... Take them and laugh. Or cry. Of course, this file comes from a hand-crafted template, but has some autogenerated bits to it. I still have to tweak it quite a bit to correct several of its usability shortcomings, but at least it looks somewhat like what I want to achieve. Anyway, I started out by asking a "dear lazyweb" question. So, here it goes: Do you think I'm using the right visualization for my data? Do you have any better suggestions, either of a graph or of a graph-generating tool? Thanks! [update] Thanks for the first pointer, Lazyweb! I found a beautiful solution; we will see if it is what I need or not (it is too space-greedy to be readable... But I will check it out more thoroughly). It lays out much better than anything I can spew out by myself. Writing it as a mindmap using TikZ directly from within LaTeX, I get the following result:

24 December 2016

Russ Allbery: Review: Warchild

Review: Warchild, by Karin Lowachee
Series: Warchild #1
Publisher: Warner Aspect
Copyright: April 2002
ISBN: 0-446-61077-1
Format: Mass market
Pages: 451
In a future world of deep space stations and starship trade routes, Jos Musey grew up on a merchant ship with a loving family and typical childhood companions. But, at the age of eight, his ship was taken by pirates and he's taken as a slave. That might have been the end of his story, but after a year of captivity he manages to escape during an alien attack on a distant frontier station. Jos then learns more than he ever expected to learn about the ongoing deep space war between the human military and the aliens and their human sympathizers. From both sides. Warchild feels so much like a collection of 1980s SF tropes that I'm a bit surprised it was published in 2002. Some of those have been part of SF well before the 1980s: the coming-of-age story of a child in space, deep-space combat and merchant fleets, pirates, and sketchy stations. But when one adds the Japanese-inspired philosophy and combat training, with a bit of Karate Kid feel, plus the (oddly bolted on) cyberpunk "burndiving," this book feels deeply embedded in a specific generation of SF storytelling. That's not necessarily a drawback. I like some of those tropes. The martial arts training coupled with careful and patient psychology worked very well for me. It may be a bit stereotyped, but Lowachee is careful to never present it as Asian; it's an alien philosophy and environment, and although it happens to wear its influences on its sleeves, it makes no attempt to tie that to any particular human culture. And the philosophy and, more to the point, the approach Niko takes with Jos is exactly what Jos needs. That section of the book (the second) was by far my favorite. I wish the whole book had been like that. Unfortunately, it's not. The first part is a deeply uncomfortable account of Jos's capture and enslavement (with bonus implied pedophilia). It's thankfully the shortest section of the book, but it's an endless parade of horrors that I didn't enjoy reading. Lowachee took the stylistic choice of writing it in the second person, which is a literary trick that rarely works for me and didn't work here. I'm sure the goal is to make it feel more immediate, but I didn't need this scene to be more immediate, and second person always reads as awkward and forced. If the authors write characters well, I will identify with them, but if I feel like I'm being forced to identify with them, I just start getting irritated. The third part of the book goes in yet a different direction: military SF, complete with hazing, camaraderie, esprit de corps, and bloody combat, with an uncomfortable undertone of constant stress due to Jos's complex and dangerous position. I wanted this to be much shorter and wanted the book to return to the part that I really liked. Unfortunately, that's not to be; the tone of this section is the tone for the rest of the book. To be fair, it's better than I expected it to be, and Jos's recovery and coming-of-age continues in more subtle and more satisfying ways than at first it seemed like it would. But Lowachee complicates and largely breaks a recovery that I was hoping would proceed down a more peaceful path, and replaced a beautiful and interesting (if a bit stereotyped) environment with bog-standard military SF. If you like that sort of thing, there's a lot of that thing here, but I've read a lot of books with that setting and far fewer about an Asian-inspired martial alien philosophy. I think Warchild has a bit too much stuff going on and not enough recovery space. 
The cyberpunk angle probably gets developed more in later books of the series (the next book is Burndive, which is the name for cyberpunk hacking in this book), but it felt bolted on here. Jos's story has multiple false starts and complications, and Lowachee keeps pulling the rug out from under him again until both he and the reader go a bit numb. The ending mostly works, but it's a brutal resolution to the complex psychological situation Lowachee sets up. This book reminds me a bit of C.J. Cherryh in that the characters seem constantly stressed beyond their ability to cope. I wanted something a bit kinder and softer. Despite that, the psychology and the brief moments of understanding and light are compelling enough that I'm still tempted to read on in this series. The subsequent books follow other characters; maybe they'll be a bit less nasty to their protagonists. Followed by Burndive. Rating: 6 out of 10

7 December 2016

Jonas Meurer: On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration

On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration On November 4, I was made aware of a security vulnerability in the integration of cryptsetup into the initramfs. The vulnerability was discovered by security researchers Hector Marco and Ismael Ripoll of the CyberSecurity UPV Research Group and got CVE-2016-4484 assigned. In this post I'll try to reflect a bit on what the issue is about and how it was handled.

What CVE-2016-4484 is all about Basically, the vulnerability is about two separate but related issues:

1. Initramfs rescue shell considered harmful The main topic that Hector Marco and Ismael Ripoll address in their publication is that Debian exits into a rescue shell in case of failure during initramfs, and that this can be triggered by entering a wrong password ~93 times in a row. Indeed the Debian initramfs implementation as provided by initramfs-tools exits into a rescue shell (usually a busybox shell) after a defined number of failed attempts to make the root filesystem available. The loop in question is in local_device_setup() in the local initramfs script. In general, this behaviour is considered a feature: if the root device hasn't shown up after 30 rounds, the rescue shell is spawned to provide the local user/admin a way to debug and fix things herself. Hector Marco and Ismael Ripoll argue that in special environments, e.g. on public computers with password-protected BIOS/UEFI and bootloader, this opens an attack vector and needs to be regarded as a security vulnerability:
It is common to assume that once the attacker has physical access to the computer, the game is over. The attackers can do whatever they want. And although this was true 30 years ago, today it is not. There are many "levels" of physical access. [...] In order to protect the computer in these scenarios: the BIOS/UEFI has one or two passwords to protect the booting or the configuration menu; the GRUB also has the possibility to use multiple passwords to protect unauthorized operations. And in the case of an encrypted system, the initrd shall block the maximum number of password trials and prevent the access to the computer in that case.
While Hector and Ismael have a valid point in that the rescue shell might open an additional attack vector in special setups, this is not true for the vast majority of Debian systems out there: in most cases a local attacker can alter the boot order, replace or add boot devices, modify boot options in the (GNU GRUB) bootloader menu or modify/replace arbitrary hardware parts. The required scenario to make the initramfs rescue shell an additional attack vector is indeed very special: locked down hardware, password protected BIOS and bootloader but still local keyboard (or serial console) access are required at least. Hector and Ismael argue that the default should be changed for enhanced security:
[...] But then Linux is used in more hostile environments, this helpful (but naive) recovery services shall not be the default option.
For the reasons explained above, I tend to disagree with Hector's and Ismael's opinion here. And after discussing this topic with several people I find my opinion reconfirmed: the Debian Security Team disputes the security impact of the issue and others agree. But leaving the disputable opinion on a sane default aside, I don't think that the cryptsetup package is the right place to change the default, if at all. If you want added security from a locked down initramfs (i.e. no rescue shell spawned), then at least the bootloader (GNU GRUB) needs to be locked down by default as well. To make it clear: if one wants to lock down the boot process, bootloader and initramfs should be locked down together. And the right place to do this would be the configurable behaviour of grub-mkconfig. Here, one can set a password for GRUB and the boot parameter 'panic=1', which disables the spawning of a rescue shell in initramfs. But as mentioned, I don't agree that these would be sane defaults. The vast majority of Debian systems out there gain no security from a locked down bootloader and initramfs, and the benefit of a rescue shell for debugging purposes clearly outweighs the minor security impact in my opinion. For the few setups which require the added security of a locked down bootloader and initramfs, we already have the relevant options documented in the Securing Debian Manual. After discussing the topic with the initramfs-tools maintainers today, Guilhem and I (the cryptsetup maintainers) finally decided not to change any defaults and to just add a 'sleep 60' after the maximum allowed number of attempts has been reached. 2. tries=n option ignored, local brute-force slightly cheaper Apart from the issue of a rescue shell being spawned, Hector and Ismael also discovered a programming bug in the cryptsetup initramfs integration. This bug in the cryptroot initramfs local-top script allowed endless retries of passphrase input, ignoring the tries=n option of crypttab (and the default of 3). As a result, theoretically unlimited attempts to unlock encrypted disks were possible when processed during the initramfs stage. The attack vector here is that local brute-force attacks become a bit cheaper: instead of having to reboot after the maximum number of tries is reached, one can keep trying passwords. Even though efficient brute-force attacks are mitigated by the PBKDF2 implementation in cryptsetup, this clearly is a real bug. The reason for the bug was twofold:
  • First, the condition in setup_mapping() responsible for making the function fail when the maximum amount of allowed attempts is reached, was never met:
    setup_mapping()
    {
      [...]
      # Try to get a satisfactory password $crypttries times
      count=0
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          export CRYPTTAB_TRIED="$count"
          count=$(( $count + 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -gt $crypttries ]; then
          message "cryptsetup: maximum number of tries exceeded for $crypttarget"
          return 1
      fi
      [...]
    }
    As one can see, the while loop stops as soon as $count -lt $crypttries no longer holds, which means that on exit $count equals $crypttries. Thus the second condition $count -gt $crypttries is never met. This can easily be fixed by decreasing $count by one in case of a successful unlock attempt, along with changing the second condition to $count -ge $crypttries:
    setup_mapping()
    {
      [...]
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          [...]
          # decrease $count by 1, apparently last try was successful.
          count=$(( $count - 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then
          [...]
      fi
      [...]
    }
    Christian Lamparter had already spotted this bug back in October 2011 and provided an (incomplete) patch, but back then I even managed to merge the patch in an improper way, making it even more useless: the patch by Christian forgot to decrease $count by one in case of a successful unlock attempt, resulting in warnings about the maximum number of tries being exceeded even for successful attempts in some circumstances. But instead of adding the decrease myself and keeping the (almost correct) condition $count -eq $crypttries for detecting exceeded maximum tries, I changed the condition back to the wrong original $count -gt $crypttries, which again was never met. Apparently I didn't test the fix properly back then. I definitely should do better in the future!
  • Second, back in December 2013, I added a cryptroot initramfs local-block script, as suggested by Goswin von Brederlow, in order to fix bug #678692. The purpose of the cryptroot initramfs local-block script is to invoke the cryptroot initramfs local-top script again and again in a loop. This is required to support complex block device stacks. In fact, the countless combinations of stacked block devices are one of the biggest and most inglorious reasons that the cryptsetup initramfs integration scripts became so complex over the years. After all, we need to support setups like rootfs on top of LVM with two separate encrypted PVs, or rootfs on top of LVM on top of dm-crypt on top of MD raid. The problem with the local-block script is that exiting the setup_mapping() function merely triggers a new invocation of the very same function. The researchers who discovered the bug suggested a simple and good solution: when the maximum number of attempts is detected (by the second condition from above), the script sleeps for 60 seconds. This mitigates the brute-force options for local attackers - even rebooting after the maximum number of attempts would be faster.

About disclosure, wording and clickbaiting I'm happy that Hector and Ismael brought up the topic and made their argument about the security impact of an initramfs rescue shell, even though I have to admit that I was rather astonished that they got a CVE assigned. Nevertheless I'm very happy that they informed the Security Teams of Debian and Ubuntu prior to publishing their findings, which in turn put me in the loop. Also, Hector and Ismael were open and responsive when it came to discussing their proposed fixes. But unfortunately the way they advertised their finding was not very helpful. They announced a talk on this topic at DeepSec 2016 in Vienna with the headline Abusing LUKS to Hack the System. Honestly, this headline is misleading - if not wrong - in several ways:
  • First, the whole issue is not about LUKS, nor is it about cryptsetup itself. It's about Debian's integration of cryptsetup into the initramfs, which is a completely different story.
  • Second, the term 'hack the system' suggests that an exploit to break into the system is revealed. This is not true: the device encryption is not endangered at all.
  • Third - as shown above - very special prerequisites need to be met for the mere existence of a LUKS-encrypted device to be the relevant factor in being able to spawn a rescue shell during initramfs.
Unfortunately, the way this issue was published led to even worse articles in the tech news press. Headlines like Major security hole found in Cryptsetup script for LUKS disk encryption or Linux Flaw allows Root Shell During Boot-Up for LUKS Disk-Encrypted Systems suggest that a major security vulnerability was revealed and that it compromised the protection that cryptsetup and LUKS offer. If these articles did anything at all, it was causing damage to the cryptsetup project, which is not affected by the whole issue at all. After the cat was out of the bag, Marco and Ismael agreed that the way the news picked up the issue was suboptimal, but I cannot fight the feeling that the over-exaggeration was partly intended and that clickbaiting is taking place here. That's a bit sad.

3 December 2016

Vincent Bernat: Build-time dependency patching for Android

This post shows how to patch an external dependency for an Android project at build-time with Gradle. This leverages the Transform API and Javassist, a Java bytecode manipulation tool.
buildscript {
    dependencies {
        classpath 'com.android.tools.build:gradle:2.2.+'
        classpath 'com.android.tools.build:transform-api:1.5.+'
        classpath 'org.javassist:javassist:3.21.+'
        classpath 'commons-io:commons-io:2.4'
    }
}
Disclaimer: I am not a seasoned Android programmer, so take this with a grain of salt.

Context This section adds some context to the example. Feel free to skip it. Dashkiosk is an application to manage dashboards on many displays. It provides an Android application you can install on one of those cheap Android sticks. Under the hood, the application is an embedded webview backed by the Crosswalk Project web runtime, which brings an up-to-date web engine, even for older versions of Android. Recently, a security vulnerability was spotted in how invalid certificates were handled. When a certificate cannot be verified, the webview defers the decision to the host application by calling the onReceivedSslError() method:
Notify the host application that an SSL error occurred while loading a resource. The host application must call either callback.onReceiveValue(true) or callback.onReceiveValue(false). Note that the decision may be retained for use in response to future SSL errors. The default behavior is to pop up a dialog.
The default behavior is specific to the Crosswalk webview: the Android built-in one just cancels the load. Unfortunately, the fix applied by Crosswalk is different and, as a side effect, the onReceivedSslError() method is not invoked anymore. Dashkiosk comes with an option to ignore TLS errors. The mentioned security fix breaks this feature. The following example will demonstrate how to patch Crosswalk to recover the previous behavior.

Simple method replacement Let's replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:
// In SslUtil class
public static boolean shouldDenyRequest(int error) {
    return false;
}

Transform registration Gradle Transform API enables the manipulation of compiled class files before they are converted to DEX files. To declare a transform and register it, include the following code in your build.gradle:
import com.android.build.api.transform.Context
import com.android.build.api.transform.QualifiedContent
import com.android.build.api.transform.Transform
import com.android.build.api.transform.TransformException
import com.android.build.api.transform.TransformInput
import com.android.build.api.transform.TransformOutputProvider
import org.gradle.api.logging.Logger

class PatchXWalkTransform extends Transform {
    Logger logger = null;

    public PatchXWalkTransform(Logger logger) {
        this.logger = logger
    }

    @Override
    String getName() {
        return "PatchXWalk"
    }

    @Override
    Set<QualifiedContent.ContentType> getInputTypes() {
        return Collections.singleton(QualifiedContent.DefaultContentType.CLASSES)
    }

    @Override
    Set<QualifiedContent.Scope> getScopes() {
        return Collections.singleton(QualifiedContent.Scope.EXTERNAL_LIBRARIES)
    }

    @Override
    boolean isIncremental() {
        return true
    }

    @Override
    void transform(Context context,
                   Collection<TransformInput> inputs,
                   Collection<TransformInput> referencedInputs,
                   TransformOutputProvider outputProvider,
                   boolean isIncremental) throws IOException, TransformException, InterruptedException {
        // We should do something here
    }
}

// Register the transform
android.registerTransform(new PatchXWalkTransform(logger))
The getInputTypes() method should return the set of types of data consumed by the transform. In our case, we want to transform classes. Another possibility is to transform resources. The getScopes() method should return a set of scopes for the transform. In our case, we are only interested in the external libraries. It's also possible to transform our own classes. The isIncremental() method returns true because we support incremental builds. The transform() method is expected to take all the provided inputs and copy them (with or without modifications) to the location supplied by the output provider. We haven't implemented this method yet, so it currently causes the removal of all external dependencies from the application.

Noop transform To keep all external dependencies unmodified, we must copy them:
@Override
void transform(Context context,
               Collection<TransformInput> inputs,
               Collection<TransformInput> referencedInputs,
               TransformOutputProvider outputProvider,
               boolean isIncremental) throws IOException, TransformException, InterruptedException {
    inputs.each {
        it.jarInputs.each {
            def jarName = it.name
            def src = it.getFile()
            def dest = outputProvider.getContentLocation(jarName,
                                                         it.contentTypes, it.scopes,
                                                         Format.JAR);
            def status = it.getStatus()
            if (status == Status.REMOVED) { // ❶
                logger.info("Remove ${src}")
                FileUtils.delete(dest)
            } else if (!isIncremental || status != Status.NOTCHANGED) { // ❷
                logger.info("Copy ${src}")
                FileUtils.copyFile(src, dest)
            }
        }
    }
}
We also need two additional imports:
import com.android.build.api.transform.Status
import org.apache.commons.io.FileUtils
Since we are handling external dependencies, we only have to manage JAR files. Therefore, we only iterate on jarInputs and not on directoryInputs. There are two cases when handling an incremental build: either the file has been removed (❶) or it has been modified (❷). In all other cases, we can safely assume the file is already correctly copied.

JAR patching When the external dependency is the Crosswalk JAR file, we also need to modify it. Here is the first part of the code (replacing ❷):
if ("$ src " ==~ ".*/org.xwalk/xwalk_core.*/classes.jar")  
    def pool = new ClassPool()
    pool.insertClassPath("$ src ")
    def ctc = pool.get('org.xwalk.core.internal.SslUtil') //  
    def ctm = ctc.getDeclaredMethod('shouldDenyRequest')
    ctc.removeMethod(ctm) //  
    ctc.addMethod(CtNewMethod.make("""
public static boolean shouldDenyRequest(int error)  
    return false;
 
""", ctc)) //  
    def sslUtilBytecode = ctc.toBytecode() //  
    // Write back the JAR file
    //  
  else  
    logger.info("Copy $ src ")
    FileUtils.copyFile(src, dest)
 
We also need the following additional imports to use Javassist:
import javassist.ClassPath
import javassist.ClassPool
import javassist.CtNewMethod
Once we have located the JAR file we want to modify, we add it to our classpath and retrieve the class we are interested in (❶). We locate the appropriate method and delete it (❷). Then, we add our custom method using the same name (❸). The whole operation is done in memory; we only retrieve the bytecode of the modified class in ❹. The remaining step is to rebuild the JAR file:
def input = new JarFile(src)
def output = new JarOutputStream(new FileOutputStream(dest))
// ❶
input.entries().each {
    if (!it.getName().equals("org/xwalk/core/internal/SslUtil.class")) {
        def s = input.getInputStream(it)
        output.putNextEntry(new JarEntry(it.getName()))
        IOUtils.copy(s, output)
        s.close()
    }
}
// ❷
output.putNextEntry(new JarEntry("org/xwalk/core/internal/SslUtil.class"))
output.write(sslUtilBytecode)
output.close()
We need the following additional imports:
import java.util.jar.JarEntry
import java.util.jar.JarFile
import java.util.jar.JarOutputStream
import org.apache.commons.io.IOUtils
There are two steps: in ❶, all classes are copied to the new JAR, except the SslUtil class; in ❷, the modified bytecode for SslUtil is added to the JAR. That's all! You can view the complete example on GitHub.
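As an optional extra (not in the original article), a quick sanity check could be added right after the write-back: reload the freshly written JAR with a new ClassPool and make sure the patched method is really there. The variable names below mirror the ones used above.
def check = new ClassPool()
check.insertClassPath(dest.getAbsolutePath())
// Both calls throw NotFoundException if the class or the method is missing
def patched = check.get('org.xwalk.core.internal.SslUtil')
assert patched.getDeclaredMethod('shouldDenyRequest') != null
logger.info("Patched ${dest}")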

More complex method replacement
In the above example, the new method doesn't use any external dependency. Let's suppose we also want to replace the sslErrorFromNetErrorCode() method from the same class with the following one:
import org.chromium.net.NetError;
import android.net.http.SslCertificate;
import android.net.http.SslError;
// In SslUtil class
public static SslError sslErrorFromNetErrorCode(int error,
                                                SslCertificate cert,
                                                String url) {
    switch(error) {
        case NetError.ERR_CERT_COMMON_NAME_INVALID:
            return new SslError(SslError.SSL_IDMISMATCH, cert, url);
        case NetError.ERR_CERT_DATE_INVALID:
            return new SslError(SslError.SSL_DATE_INVALID, cert, url);
        case NetError.ERR_CERT_AUTHORITY_INVALID:
            return new SslError(SslError.SSL_UNTRUSTED, cert, url);
        default:
            break;
    }
    return new SslError(SslError.SSL_INVALID, cert, url);
}
The major difference from the previous example is that we need to import some additional classes.

Android SDK import
The classes from the Android SDK are not part of the external dependencies, so they need to be imported separately. The full path of the JAR file is:
androidJar = "$ android.getSdkDirectory().getAbsolutePath() /platforms/" +
             "$ android.getCompileSdkVersion() /android.jar"
We need to load it before adding the new method to the SslUtil class:
def pool = new ClassPool()
pool.insertClassPath(androidJar)
pool.insertClassPath("$ src ")
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
// …

External dependency import
We must also import org.chromium.net.NetError, and therefore we need to put the appropriate JAR into our classpath. The easiest way is to iterate through all the external dependencies and add them to the classpath.
def pool = new ClassPool()
pool.insertClassPath(androidJar)
inputs.each {
    it.jarInputs.each {
        def jarName = it.name
        def src = it.getFile()
        def status = it.getStatus()
        if (status != Status.REMOVED) {
            pool.insertClassPath("${src}")
        }
    }
}
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
pool.importPackage('org.chromium.net.NetError');
ctc.addMethod(CtNewMethod.make("…"))
// Then, rebuild the JAR...
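For reference, here is what the elided CtNewMethod.make() call could look like, assembled from the replacement method shown earlier. This is a sketch, not the author's exact code: the body is written with if/else rather than a switch to stay within the subset of Java that Javassist's embedded compiler comfortably handles, and it relies on the importPackage() calls above so that the simple class names can be resolved.
ctc.addMethod(CtNewMethod.make("""
public static SslError sslErrorFromNetErrorCode(int error,
                                                SslCertificate cert,
                                                String url) {
    if (error == NetError.ERR_CERT_COMMON_NAME_INVALID)
        return new SslError(SslError.SSL_IDMISMATCH, cert, url);
    if (error == NetError.ERR_CERT_DATE_INVALID)
        return new SslError(SslError.SSL_DATE_INVALID, cert, url);
    if (error == NetError.ERR_CERT_AUTHORITY_INVALID)
        return new SslError(SslError.SSL_UNTRUSTED, cert, url);
    return new SslError(SslError.SSL_INVALID, cert, url);
}
""", ctc))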
Happy hacking!

  1. Before Android 4.4, the webview was severely outdated. Starting from Android 5, the webview is shipped as a separate component with updates. Embedding Crosswalk is still convenient as you know exactly which version you can rely on.
  2. I hope to have this fixed in later versions.
  3. This may seem harmful, and you are right. However, if you have an internal CA, it is currently not possible to provide your own trust store to a webview; the system trust store is not used either. You may also want to use TLS for authentication only, with client certificates, a feature supported by Dashkiosk.
  4. Since Crosswalk is an open-source project, an alternative would have been to patch the Crosswalk source code and recompile it. However, Crosswalk embeds Chromium, and recompiling the whole thing consumes a lot of resources.
